<?xml version="1.0" encoding="ISO-8859-1"?>
<metadatalist>
	<metadata ReferenceType="Conference Proceedings">
		<site>sibgrapi.sid.inpe.br 802</site>
		<holdercode>{ibi 8JMKD3MGPEW34M/46T9EHH}</holdercode>
		<identifier>8JMKD3MGPEW34M/3U3ETBS</identifier>
		<repository>sid.inpe.br/sibgrapi/2019/09.15.02.10</repository>
		<lastupdate>2019:09.15.02.10.34 sid.inpe.br/banon/2001/03.30.15.38 administrator</lastupdate>
		<metadatarepository>sid.inpe.br/sibgrapi/2019/09.15.02.10.34</metadatarepository>
		<metadatalastupdate>2022:06.14.00.09.42 sid.inpe.br/banon/2001/03.30.15.38 administrator {D 2019}</metadatalastupdate>
		<doi>10.1109/SIBGRAPI.2019.00043</doi>
		<citationkey>CardenasCernChav:2019:DySiLa</citationkey>
		<title>Dynamic Sign Language Recognition Based on Convolutional Neural Networks and Texture Maps</title>
		<format>On-line</format>
		<year>2019</year>
		<numberoffiles>1</numberoffiles>
		<size>2493 KiB</size>
		<author>Cardenas, Edwin Jonathan Escobedo,</author>
		<author>Cerna, Lourdes Ramirez,</author>
		<author>Chavez, Guillermo Camara,</author>
		<affiliation>Federal University of Ouro Preto</affiliation>
		<affiliation>National University of Ouro Preto</affiliation>
		<affiliation>Federal University of Ouro Preto</affiliation>
		<editor>Oliveira, Luciano Rebouças de,</editor>
		<editor>Sarder, Pinaki,</editor>
		<editor>Lage, Marcos,</editor>
		<editor>Sadlo, Filip,</editor>
		<e-mailaddress>edu.escobedo88@gmail.com</e-mailaddress>
		<conferencename>Conference on Graphics, Patterns and Images, 32 (SIBGRAPI)</conferencename>
		<conferencelocation>Rio de Janeiro, RJ, Brazil</conferencelocation>
		<date>28-31 Oct. 2019</date>
		<publisher>IEEE Computer Society</publisher>
		<publisheraddress>Los Alamitos</publisheraddress>
		<booktitle>Proceedings</booktitle>
		<tertiarytype>Full Paper</tertiarytype>
		<transferableflag>1</transferableflag>
		<versiontype>finaldraft</versiontype>
		<keywords>CNN, sign language, texture maps</keywords>
		<abstract>Sign language recognition (SLR) is a very challenging task due to the complexity of learning or developing descriptors to represent its primary parameters (location, movement, and hand configuration). In this paper, we propose a robust deep learning-based method for sign language recognition. Our approach represents multimodal information (RGB-D) through texture maps to describe the hand location and movement. Moreover, we introduce an intuitive method to extract a representative frame that describes the hand shape. Next, we use this information as input to two CNN models, a three-stream and a two-stream network, to learn robust features capable of recognizing a dynamic sign. We conduct our experiments on two sign language datasets, and the comparison with state-of-the-art SLR methods reveals the superiority of our approach, which optimally combines texture maps and hand shape for SLR tasks.</abstract>
		<language>en</language>
		<targetfile>PID111.pdf</targetfile>
		<usergroup>edu.escobedo88@gmail.com</usergroup>
		<visibility>shown</visibility>
		<documentstage>not transferred</documentstage>
		<mirrorrepository>sid.inpe.br/banon/2001/03.30.15.38.24</mirrorrepository>
		<nexthigherunit>8JMKD3MGPEW34M/3UA4FNL</nexthigherunit>
		<nexthigherunit>8JMKD3MGPEW34M/3UA4FPS</nexthigherunit>
		<nexthigherunit>8JMKD3MGPEW34M/4742MCS</nexthigherunit>
		<citingitemlist>sid.inpe.br/sibgrapi/2019/10.25.18.30.33 14</citingitemlist>
		<hostcollection>sid.inpe.br/banon/2001/03.30.15.38</hostcollection>
		<username>edu.escobedo88@gmail.com</username>
		<agreement>agreement.html .htaccess .htaccess2</agreement>
		<lasthostcollection>sid.inpe.br/banon/2001/03.30.15.38</lasthostcollection>
		<url>http://sibgrapi.sid.inpe.br/rep-/sid.inpe.br/sibgrapi/2019/09.15.02.10</url>
	</metadata>
</metadatalist>